Accurate Pulmonary Nodule Detection in Computed Tomography Images Using Deep Convolutional Neural Networks
Early detection of pulmonary cancer is the most promising way to enhance a
patient's chance of survival. Accurate pulmonary nodule detection in computed
tomography (CT) images is a crucial step in diagnosing pulmonary cancer. In
this paper, inspired by the successful use of deep convolutional neural
networks (DCNNs) in natural image recognition, we propose a novel pulmonary
nodule detection approach based on DCNNs. We first introduce a deconvolutional
structure to Faster Region-based Convolutional Neural Network (Faster R-CNN)
for candidate detection on axial slices. Then, a three-dimensional DCNN is
presented for the subsequent false positive reduction. Experimental results of
the LUng Nodule Analysis 2016 (LUNA16) Challenge demonstrate the superior
detection performance of the proposed approach (average FROC-score of 0.891,
ranking first among all submitted results).
Comment: Accepted at MICCAI 2017.
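The deconvolutional structure added to Faster R-CNN upsamples coarse feature maps so that small nodules remain resolvable on the axial slices. As a rough illustration of the underlying operation (the kernel, stride, and sizes below are illustrative assumptions, not the authors' configuration), a transposed convolution can be sketched in plain NumPy:

```python
import numpy as np

def deconv2d(feat, kernel, stride=2):
    """Transposed convolution ("deconvolution"): upsample a coarse feature
    map by inserting zeros between elements, then correlating with a kernel.
    This is how a deconvolutional layer restores spatial resolution lost to
    pooling, which helps localize small candidates such as lung nodules."""
    h, w = feat.shape
    k = kernel.shape[0]
    # Dilate the input with (stride - 1) zeros between elements.
    up = np.zeros(((h - 1) * stride + 1, (w - 1) * stride + 1))
    up[::stride, ::stride] = feat
    # Pad so every input element's kernel footprint lies inside the output.
    up = np.pad(up, k - 1)
    out = np.zeros((up.shape[0] - k + 1, up.shape[1] - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(up[i:i + k, j:j + k] * kernel)
    return out

coarse = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 feature map
kernel = np.ones((3, 3)) / 9.0                     # illustrative kernel
fine = deconv2d(coarse, kernel)                    # roughly 2x resolution
print(fine.shape)                                  # (9, 9)
```

Because the kernel sums to one, the total activation is preserved while the map roughly doubles in resolution; a real layer would learn the kernel weights instead.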
S4ND: Single-Shot Single-Scale Lung Nodule Detection
State-of-the-art lung nodule detection studies rely on computationally
expensive multi-stage frameworks to detect nodules from CT scans. To address
this computational challenge and provide better performance, in this paper we
propose S4ND, a new deep learning based method for lung nodule detection. Our
approach uses a single feed forward pass of a single network for detection and
provides better performance when compared to the current literature. The whole
detection pipeline is designed as a single Convolutional Neural Network
(CNN) with dense connections, trained in an end-to-end manner. S4ND does not
require any further post-processing or user guidance to refine detection
results. Experimentally, we compared our network with the current
state-of-the-art object detection network (SSD) in computer vision as well as
the state-of-the-art published method for lung nodule detection (3D DCNN). We
used publicly available CT scans from the LUNA challenge dataset and showed
that the proposed method outperforms the current literature in terms of both
efficiency and accuracy, achieving an average FROC-score of . We also
provide an in-depth analysis of our proposed network to shed light on the
unclear paradigms of tiny object detection.
Comment: Accepted for publication at MICCAI 2018 (21st International
Conference on Medical Image Computing and Computer Assisted Intervention).
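A single-shot detector of this kind emits a dense grid of per-cell nodule probabilities in one forward pass, so detection reduces to decoding that grid. A minimal sketch of such a decode step, assuming a hypothetical cell size and threshold (not the actual S4ND output format):

```python
import numpy as np

def decode_grid(prob_grid, cell_size, threshold=0.5):
    """Decode a single-shot detector's output: a dense grid in which each
    cell holds the probability that it contains a nodule. One pass over the
    grid replaces a separate proposal + false-positive-reduction pipeline
    (illustrative sketch, not the S4ND architecture itself)."""
    detections = []
    for idx in np.argwhere(prob_grid > threshold):
        center = (idx + 0.5) * cell_size   # cell index -> voxel coordinates
        detections.append((tuple(center), float(prob_grid[tuple(idx)])))
    return detections

# Toy 4x4x4 output grid for a 64^3 volume (hypothetical cell size of 16).
grid = np.zeros((4, 4, 4))
grid[1, 2, 3] = 0.9                        # one confident cell
dets = decode_grid(grid, cell_size=16.0)
print(dets)                                # [((24.0, 40.0, 56.0), 0.9)]
```

No post-processing or user guidance is needed beyond this thresholding step, which is the efficiency argument the abstract makes against multi-stage frameworks.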
CASED: Curriculum Adaptive Sampling for Extreme Data Imbalance
We introduce CASED, a novel curriculum sampling algorithm that facilitates
the optimization of deep learning segmentation or detection models on data sets
with extreme class imbalance. We evaluate the CASED learning framework on the
task of lung nodule detection in chest CT. In contrast to two-stage solutions,
wherein nodule candidates are first proposed by a segmentation model and
refined by a second detection stage, CASED improves the training of deep nodule
segmentation models (e.g. UNet) to the point where state of the art results are
achieved using only a trivial detection stage. CASED improves the optimization
of deep segmentation models by allowing them to first learn how to distinguish
nodules from their immediate surroundings, while continuously adding a greater
proportion of difficult-to-classify global context, until uniformly sampling
from the empirical data distribution. Using CASED during training yields a
minimalist proposal to the lung nodule detection problem that tops the LUNA16
nodule detection benchmark with an average sensitivity score of 88.35%.
Furthermore, we find that models trained using CASED are robust to nodule
annotation quality by showing that comparable results can be achieved when only
a point and radius for each ground truth nodule are provided during training.
Finally, the CASED learning framework makes no assumptions with regard to
imaging modality or segmentation target and should generalize to other medical
imaging problems where class imbalance is a persistent problem.
Comment: 20th International Conference on Medical Image Computing and Computer
Assisted Intervention (MICCAI 2017).
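The curriculum described above (start from nodule-centric patches, then add a growing proportion of uniformly sampled context until training matches the empirical distribution) can be sketched as a sampling schedule. The linear ramp and patch names below are illustrative assumptions, not the paper's exact schedule:

```python
import random

def cased_sample(step, total_steps, nodule_patches, all_patches, rng=random):
    """Curriculum sampling in the spirit of CASED: early in training, draw
    patches centered on nodules (easy to separate from their immediate
    surroundings); as training proceeds, mix in a growing fraction of
    patches sampled uniformly from the data, until sampling matches the
    empirical distribution. The linear ramp is an illustrative assumption."""
    p_uniform = min(1.0, step / total_steps)   # ramps from 0 to 1
    if rng.random() < p_uniform:
        return rng.choice(all_patches)         # uniform over the data
    return rng.choice(nodule_patches)          # nodule-centric patch

nodules = ["nodule_patch_%d" % i for i in range(5)]
everything = nodules + ["background_patch_%d" % i for i in range(95)]
# At step 0 the sampler is fully nodule-centric.
batch_start = [cased_sample(0, 1000, nodules, everything) for _ in range(10)]
print(batch_start)
```

By the final steps `p_uniform` reaches 1 and every draw comes from the empirical distribution, so the extreme background-to-nodule imbalance is only faced once the model can already separate nodules from local context.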
Deep Lesion Graphs in the Wild: Relationship Learning and Organization of Significant Radiology Image Findings in a Diverse Large-scale Lesion Database
Radiologists in their daily work routinely find and annotate significant
abnormalities on a large number of radiology images. Such abnormalities, or
lesions, have been collected over the years and stored in hospitals' picture
archiving and communication systems. However, they are largely unsorted and lack
semantic annotations like type and location. In this paper, we aim to organize
and explore them by learning a deep feature representation for each lesion. A
large-scale and comprehensive dataset, DeepLesion, is introduced for this task.
DeepLesion contains bounding boxes and size measurements of over 32K lesions.
To model their similarity relationship, we leverage multiple supervision
information including types, self-supervised location coordinates and sizes.
They require little manual annotation effort but describe useful attributes of
the lesions. Then, a triplet network is utilized to learn lesion embeddings
with a sequential sampling strategy to depict their hierarchical similarity
structure. Experiments show promising qualitative and quantitative results on
lesion retrieval, clustering, and classification. The learned embeddings can be
further employed to build a lesion graph for various clinically useful
applications. We propose algorithms for intra-patient lesion matching and
missing annotation mining. Experimental results validate their effectiveness.
Comment: Accepted by CVPR 2018. DeepLesion URL added.
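The triplet network mentioned above learns embeddings by pulling similar lesions together and pushing dissimilar ones apart. A minimal sketch of the standard triplet loss on fixed embedding vectors, assuming an illustrative margin (not the paper's training setup):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss on lesion embeddings: pull the anchor toward a similar
    lesion (positive) and push it away from a dissimilar one (negative) by
    at least `margin`. The margin value here is an illustrative assumption."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2)   # squared distance to negative
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # similar lesion: close in embedding space
n = np.array([3.0, 0.0])   # dissimilar lesion: far away
loss_easy = triplet_loss(a, p, n)   # ordering satisfied -> clamped to 0
loss_hard = triplet_loss(a, n, p)   # ordering violated -> positive loss
print(loss_easy, loss_hard)
```

The supervision signals the paper lists (type, location, size) would drive the choice of positives and negatives; here they are abstracted into fixed vectors.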
Memory-Centric Accelerator Design for Convolutional Neural Networks
In the near future, cameras will be used everywhere as flexible sensors for numerous applications. For mobility and privacy reasons, the required image processing should run locally on embedded computer platforms that meet both performance requirements and energy constraints. Dedicated acceleration of Convolutional Neural Networks (CNNs) can achieve these targets with enough flexibility to perform multiple vision tasks. A challenging problem in the design of efficient accelerators is the limited amount of external memory bandwidth. We show that the effects of the memory bottleneck can be reduced by a flexible memory hierarchy that supports the complex data access patterns in CNN workloads. The efficiency of the on-chip memories is maximized by our scheduler, which uses tiling to optimize for data locality. Our design flow ensures that on-chip memory size is minimized, which reduces area and energy usage. The design flow is evaluated with a High-Level Synthesis implementation on a Virtex 6 FPGA board. Compared to accelerators with standard scratchpad memories, FPGA resource usage can be reduced by up to 13× while maintaining the same performance. Alternatively, when the same amount of FPGA resources is used, our accelerators are up to 11× faster.
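The benefit of tiling for data locality can be illustrated with a simple traffic model: without on-chip reuse every operand is re-fetched from external memory per use, while tile-level buffering fetches each tile once per pass. The matrix-multiply model and numbers below are illustrative assumptions, not measurements of the paper's accelerator:

```python
def external_loads(n, tile):
    """Model external-memory traffic for an n x n matrix multiply.
    Without on-chip reuse, both operand elements are fetched for every one
    of the n^3 multiplies; with tile x tile on-chip buffers, each matrix is
    streamed in only n/tile times, so traffic drops by roughly the tile
    size (a simplified locality model, not the paper's design flow)."""
    naive = 2 * n ** 3                   # A and B re-fetched per multiply
    tiled = 2 * n ** 2 * (n // tile)     # each matrix streamed n/tile times
    return naive, tiled

naive, tiled = external_loads(n=512, tile=32)
print(naive // tiled)                    # reuse factor equals the tile size
```

The same reuse argument applies to CNN feature maps and weights; the scheduler's job is to pick tile shapes that fit the minimized on-chip memories while keeping this reuse factor high.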
Evaluation and comparison of textural feature representation for the detection of early stage cancer in endoscopy
Esophageal cancer is the fastest-rising type of cancer in the Western world. The novel technology of High-Definition (HD) endoscopy enables physicians to find texture patterns related to early cancer. This encourages the development of a Computer-Aided Decision (CAD) system to help physicians identify early cancer faster and decrease the miss rate. However, an appropriate texture feature extraction, which is needed for classification, has not been studied yet. In this paper, we compare several techniques for texture feature extraction, including co-occurrence matrix features, Local Binary Patterns (LBP) and Gabor features, and evaluate their performance in detecting early stage cancer in HD endoscopic images. In order to exploit more image characteristics, we introduce an efficient combination of the texture and color features. Furthermore, we add a preprocessing step designed specifically for endoscopy images, which improves the classification accuracy. After reducing the feature dimensionality using Principal Component Analysis (PCA), we classify the selected features with a Support Vector Machine (SVM). The experimental results, validated by an expert gastroenterologist, show that the proposed feature extraction is promising and reaches a classification accuracy of up to 96.48%.
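Of the compared feature families, Local Binary Patterns are the simplest to sketch: each pixel is encoded by thresholding its eight neighbours against the centre. A minimal NumPy version (illustrative only; the paper's exact LBP variant and parameters are not specified here):

```python
import numpy as np

def lbp(image):
    """Minimal 8-neighbour Local Binary Pattern: each interior pixel gets an
    8-bit code, one bit per neighbour, set when the neighbour is at least as
    bright as the centre. Histograms of these codes form a texture feature
    (a simplified sketch of one compared feature family, not the full CAD
    pipeline)."""
    h, w = image.shape
    centre = image[1:h - 1, 1:w - 1]
    # Neighbour offsets clockwise from the top-left; each gets one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= centre).astype(np.uint8) << bit
    return codes

img = np.array([[9, 9, 9],
                [0, 5, 9],
                [0, 0, 0]], dtype=float)
print(lbp(img))    # one code for the single interior pixel
```

The bright top edge sets the four low-order bits and the dark bottom edge clears the rest, giving code 15; in a real image the per-pixel codes would be pooled into a histogram before PCA and SVM classification.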